
    MiceTrap: scalable traffic engineering of datacenter mice flows using OpenFlow

    Datacenter network topologies are inherently built with enough redundancy to offer multiple paths between pairs of end hosts for increased flexibility and resilience. On top of this redundancy, traffic engineering (TE) methods are needed to utilize the abundant bisection bandwidth efficiently. Previously proposed TE approaches differentiate between long-lived flows (elephant flows) and short-lived flows (mice flows), using dedicated traffic management techniques to handle elephant flows while treating mice flows with baseline routing methods. We show through an example that such an approach can cause congestion for short-lived (but not necessarily less critical) flows. To overcome this, we propose MiceTrap, an OpenFlow-based TE approach targeting datacenter mice flows. MiceTrap achieves scalability in the number of mice flows through flow aggregation, together with a software-configurable weighted routing algorithm that offers improved load balancing for mice flows.
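    The weighted-routing idea from the abstract can be sketched as follows. This is a minimal illustration, not MiceTrap's actual implementation: the path names, the residual-capacity weights and the `select_path` helper are all assumptions, standing in for the weighted buckets an OpenFlow select-group would carry.

```python
import random

# Hypothetical path table for one aggregated group of mice flows:
# each candidate path carries a software-configured weight, here
# assumed proportional to its residual capacity.
PATHS = {
    "path_a": 0.5,  # least loaded -> highest weight
    "path_b": 0.3,
    "path_c": 0.2,
}

def select_path(paths, rng):
    """Pick a next-hop path for an aggregate with probability
    proportional to its configured weight (weighted multipath)."""
    names = list(paths)
    weights = [paths[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)
counts = {p: 0 for p in PATHS}
for _ in range(10_000):
    counts[select_path(PATHS, rng)] += 1
# Load spreads roughly in proportion to the configured weights.
```

    Because mice flows are aggregated before the weights apply, the controller only re-tunes a handful of weights instead of installing one rule per short-lived flow.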

    OFLoad: An OpenFlow-based dynamic load balancing strategy for datacenter networks

    The tremendous growth of Internet traffic has ushered in a new era of mega-datacenters built to handle this explosion of data. However, this big data, with its dynamically changing traffic patterns and flows, can degrade application performance and eventually affect network operators' revenue. In this context there is a need for an intelligent and efficient network management system that makes the best use of the abundant bisection bandwidth to achieve high utilization and performance. This paper proposes OFLoad, an OpenFlow-based dynamic load balancing strategy for datacenter networks that enables efficient use of the network's capacity. A real experimental prototype is built and the proposed solution is compared against other solutions from the literature in terms of load balancing. The aim of OFLoad is to enable instant configuration of the network, making the best use of the available resources at the lowest cost and complexity.
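    A dynamic load-balancing decision of the kind the abstract describes can be sketched as picking, for each new flow, the candidate path with the least-loaded bottleneck link. The topology, link names and utilization figures below are illustrative assumptions, not OFLoad's measured data:

```python
def least_loaded_path(link_load, paths):
    """Return the candidate path whose most-loaded link has the
    lowest utilization, i.e. minimise the bottleneck."""
    def bottleneck(path):
        return max(link_load[link] for link in path)
    return min(paths, key=bottleneck)

# Illustrative link utilizations (fraction of capacity in use),
# as a controller might learn them from port statistics.
link_load = {"s1-s2": 0.8, "s1-s3": 0.4, "s2-s4": 0.2, "s3-s4": 0.6}
paths = [("s1-s2", "s2-s4"), ("s1-s3", "s3-s4")]

best = least_loaded_path(link_load, paths)
# The s1-s3-s4 path wins: its bottleneck is 0.6 vs 0.8.
```

    In an OpenFlow deployment the controller would then install a flow rule steering the new flow onto `best`; here that step is left out to keep the sketch self-contained.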

    dReDBox: Materializing a full-stack rack-scale system prototype of a next-generation disaggregated datacenter

    Current datacenters are based on server machines, whose mainboard and hardware components form the baseline, monolithic building block that the rest of the system software, middleware and application stack are built upon. This leads to the following limitations: (a) resource proportionality of a multi-tray system is bounded by the basic building block (mainboard), (b) resource allocation to processes or virtual machines (VMs) is bounded by the resources available within the boundary of the mainboard, leading to spare resource fragmentation and inefficiencies, and (c) upgrades must be applied to each and every server even when only a specific component needs to be upgraded. The dRedBox project (Disaggregated Recursive Datacentre-in-a-Box) addresses the above limitations and proposes the next generation of low-power, across-form-factor datacenters, departing from the paradigm of the mainboard-as-a-unit and enabling the creation of the function-block-as-a-unit. Hardware-level disaggregation and software-defined wiring of resources are supported by a full-fledged Type-1 hypervisor that can execute commodity virtual machines, which communicate over a low-latency, high-throughput software-defined optical network. To evaluate its novel approach, dRedBox will demonstrate application execution in the domains of network functions virtualization, infrastructure analytics, and real-time video surveillance. This work has been supported in part by the EU H2020 ICT project dRedBox, contract #687632.

    A software-defined architecture and prototype for disaggregated memory rack scale systems

    Disaggregation and rack-scale systems have the potential of drastically decreasing TCO and increasing utilization of cloud datacenters, while maintaining performance. In this paper, we present a novel rack-scale system architecture featuring software-defined remote memory disaggregation. Our hardware design and operating system extensions enable unmodified applications to dynamically attach to memory segments residing on physically remote memory pools and use such remote segments in a byte-addressable manner, as if they were local to the application. Our system also features a control plane that automates software-defined dynamic matching of compute to memory resources, as driven by datacenter workload needs. We prototyped our system on the commercially available Zynq Ultrascale+ MPSoC platform. To our knowledge, this is the first time a software-defined disaggregated system has been prototyped on commercial hardware and evaluated through industry-standard software benchmarks. Our initial results - using benchmarks that are artificially highly adversarial in terms of memory bandwidth - show that disaggregated memory access exhibits a round-trip latency of only 134 clock cycles, and a throughput penalty of as low as 55% relative to locally-attached memory. We also discuss estimates of how our findings may translate to applications with pragmatically milder memory aggressiveness levels, as well as innovation avenues across the stack opened up by our work.
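    The control-plane matching of compute to memory resources can be illustrated with a toy first-fit allocator. The pool names, sizes and the `attach_segment` helper are hypothetical stand-ins for the paper's actual orchestration logic, which the abstract does not detail:

```python
from dataclasses import dataclass

@dataclass
class MemoryPool:
    """A remote memory tray advertised to the control plane."""
    name: str
    free_mib: int

def attach_segment(pools, request_mib):
    """First-fit matching: reserve `request_mib` from the first pool
    with enough free capacity and return its name, or None if no
    pool can satisfy the request."""
    for pool in pools:
        if pool.free_mib >= request_mib:
            pool.free_mib -= request_mib
            return pool.name
    return None

pools = [MemoryPool("tray0", 1024), MemoryPool("tray1", 4096)]
first = attach_segment(pools, 2048)   # tray0 too small, lands on tray1
second = attach_segment(pools, 512)   # fits on tray0
```

    In the real system the returned segment would then be mapped into the VM's address space so that loads and stores reach the remote tray transparently; the sketch stops at the placement decision.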

    On interconnecting and orchestrating components in disaggregated data centers: The dReDBox project vision

    Computing system servers, low- or high-end ones, have traditionally been designed and built using a mainboard and its hardware components as a 'hard' monolithic building block; this formed the base unit upon which the rest of the system hardware and software stack is built. This hard deployment and management border on compute, memory, network and storage resources is either fixed or quite limited in expandability at design time, and in practice remains so throughout the machine's lifetime, as subsystem upgrades are seldom employed. The impact of this rigidity has well-known ramifications in terms of lower system resource utilization, costly upgrade cycles and degraded energy proportionality. In the dReDBox project we take on the challenge of breaking the server boundaries through materialization of the concept of disaggregation. The basic idea of the dReDBox architecture is to use a core of high-speed, low-latency opto-electronic fabric that brings physically distant components closer together in terms of latency and bandwidth. We envision a powerful software-defined control plane that matches the flexibility of the system to the resource needs of the applications (or VMs) running on it. Together, the hardware, interconnect, and software architectures will enable the creation of a modular, vertically-integrated system that forms a datacenter-in-a-box.

    Disaggregated Compute, Memory and Network Systems: A New Era for Optical Data Centre Architectures

    The disaggregated dRedBox Data Centre architecture is proposed that enables dynamic allocation of pooled compute and memory resources. An orchestration platform is described and algorithms are simulated that demonstrate the efficient utilization of IT infrastructure.

    dRedDbox: Demonstrating disaggregated memory in an optical Data Centre

    This paper showcases the first experimental demonstration of disaggregated memory using the dRedDbox optical Data Centre architecture. Experimental results demonstrate the scalability of the 4-tier network and the performance of the system at the physical and application layers.

    Demonstration of NFV for mobile edge computing on an optically disaggregated datacentre in a box

    This demonstrator showcases the hardware and software integration achieved by the dReDBox project [1] towards the realization of a novel architecture using dynamically-reconfigurable optical interconnects to create a flexible, scalable and efficient disaggregated datacentre infrastructure.

    Scalable linux container provisioning in fog and edge computing platforms

    The tremendous increase in the number of mobile devices and the proliferation of all kinds of new types of sensors are creating new value opportunities through analyzing, developing insights from, and actuating upon large volumes of data streams generated at the edge of the network. While the general-purpose processing required to unleash this value is abundant in Cloud datacenters, bringing raw IoT data streams to the Cloud poses critical challenges, including: (i) regulatory constraints related to data sensitivity, (ii) significant bandwidth costs and (iii) latency barriers inhibiting near-real-time applications. Edge Computing aspires to extend the traditional cloud model to the "edge of the network", to deliver low latency, bandwidth efficiencies and controlled privacy. For all the commonalities between the two models, transitioning the provisioning and orchestration of a distributed analytics platform from Cloud to Edge is not trivial. The two models present totally different cost structures in terms of price of bandwidth, data communication latency, power density and availability. In this paper, we address the challenge of transitioning scalable provisioning from Cloud to distributed Edge platforms. We identify current scalability challenges in Linux container provisioning at the Edge; we propose a novel peer-to-peer model to take them on; we present a prototype of this model designed for and tested on real Edge testbeds; and we report a scalability evaluation on a scale-out virtualized platform. Our results demonstrate significant savings in terms of provisioning latency and bandwidth utilization. © Springer International Publishing AG, part of Springer Nature 2018.
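    The bandwidth saving that a peer-to-peer provisioning model can yield is easy to see in a toy simulation. The node count, layer names and single-registry assumption below are illustrative, not the paper's prototype or results:

```python
def provision(nodes, image_layers):
    """Simulate provisioning one container image on every Edge node:
    a layer is pulled from the remote registry only if no local peer
    has it yet; otherwise it is fetched from a peer on-site."""
    on_site = set()        # layers already present somewhere locally
    registry_pulls = 0     # expensive WAN transfers
    peer_pulls = 0         # cheap LAN transfers between peers
    for _node in nodes:
        for layer in image_layers:
            if layer in on_site:
                peer_pulls += 1
            else:
                registry_pulls += 1
                on_site.add(layer)
    return registry_pulls, peer_pulls

reg, peer = provision(nodes=range(10), image_layers=["base", "runtime", "app"])
# Only the first node crosses the WAN for each layer; the other nine
# nodes obtain all their layers from peers.
```

    The centralized baseline would instead pay `len(nodes) * len(image_layers)` registry pulls, which is where the provisioning-latency and bandwidth savings reported in the abstract come from.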